Computers



All articles published by MDPI are made immediately available worldwide under an open access license. No special permission is required to reuse all or part of an article published by MDPI, including figures and tables. For articles published under an open access Creative Commons CC BY license, any part of the article may be reused without permission provided that the original article is clearly cited. For more information, please refer to https://www.mdpi.com/openaccess.

Feature papers represent the most advanced research with significant potential for high impact in the field. A Feature Paper should be a substantial original Article that involves several techniques or approaches, provides an outlook for future research directions and describes possible research applications.

Feature papers are submitted upon individual invitation or recommendation by the scientific editors and must receive positive feedback from the reviewers.

Editor’s Choice articles are based on recommendations by the scientific editors of MDPI journals from around the world. Editors select a small number of articles recently published in the journal that they believe will be particularly interesting to readers, or important in the respective research area. The aim is to provide a snapshot of some of the most exciting work published in the various research areas of the journal.


Journal Description

Computers is an international, scientific, peer-reviewed, open access journal of computer science, including computer and network architecture and computer–human interaction as its main foci, published monthly online by MDPI.

Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
High Visibility: indexed within Scopus, ESCI (Web of Science), dblp, Inspec, and other databases.
Journal Rank: JCR - Q2 (Computer Science, Interdisciplinary Applications) / CiteScore - Q2 (Computer Networks and Communications).
Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 17.2 days after submission; acceptance to publication is undertaken in 3.9 days (median values for papers published in this journal in the first half of 2024).
Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 2.6 (2023); 5-Year Impact Factor: 2.4 (2023).
ISSN: 2073-431X.

Latest Articles

Open Access Article (22 pages, 4727 KiB)
Hardware-Based Implementation of Algorithms for Data Replacement in Cache Memory of Processor Cores
by Larysa Titarenko, Vyacheslav Kharchenko, Vadym Puidenko, Artem Perepelitsyn and Alexander Barkalov
Computers 2024, 13(7), 166; https://doi.org/10.3390/computers13070166 - 5 Jul 2024
Abstract: Replacement policies play an important role in the functioning of the cache memory of processor cores. The implementation of a successful policy allows us to increase the performance of the processor core and the computer system as a whole. Replacement policies are most often evaluated by the percentage of cache hits during the cycles of the processor bus when accessing the cache memory. Policies that focus on replacing the Least Recently Used (LRU) or Least Frequently Used (LFU) elements, whether instructions or data, remain the most relevant in practice. It should be noted that in the paging cache buffer, the above replacement policies can also be used to replace address information. The pseudo-LRU (PLRU) policy performs replacement based on approximate information about the age of the elements in the cache memory. The hardware implementation of any replacement policy algorithm is a circuit. This hardware part of the processor core has certain characteristics: the latency of the search for a candidate element for replacement, the gate complexity, and the reliability. The characteristics of the PLRUt and PLRUm replacement policies are synthesized and investigated. Both are varieties of the PLRU replacement policy, which is close to the LRU policy in terms of the percentage of cache hits. In the current study, the hardware implementation of these policies is evaluated, and the possibility of adapting the processor core to each of the policies according to a selected priority characteristic is analyzed. The growth of delay and gate complexity with increasing cache associativity is shown, as is the advantage of the hardware implementation of the PLRUt algorithm over the PLRUm algorithm for higher values of associativity.
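For readers unfamiliar with tree-based pseudo-LRU (the PLRUt variant discussed above), a minimal Python sketch of the bookkeeping for one 4-way cache set follows; the bit layout and method names are illustrative assumptions, not the authors' hardware design, which is a circuit rather than software.

```python
# Minimal tree-PLRU (PLRUt) bookkeeping for a single 4-way cache set.
# Illustrative sketch only -- the paper studies a hardware (circuit) implementation.

class TreePLRU4Way:
    def __init__(self):
        # Three tree bits: bits[0] = root, bits[1] = left subtree, bits[2] = right subtree.
        # A value of 0 means "the approximate LRU side is the left child", 1 means "the right child".
        self.bits = [0, 0, 0]

    def touch(self, way: int) -> None:
        """Update the tree bits so they point *away* from the way just accessed."""
        assert 0 <= way < 4
        self.bits[0] = 0 if way >= 2 else 1          # root points to the other half
        if way < 2:
            self.bits[1] = 0 if way == 1 else 1      # left subtree bit
        else:
            self.bits[2] = 0 if way == 3 else 1      # right subtree bit

    def victim(self) -> int:
        """Follow the tree bits to the approximate least-recently-used way."""
        if self.bits[0] == 0:
            return 0 if self.bits[1] == 0 else 1
        return 2 if self.bits[2] == 0 else 3


if __name__ == "__main__":
    plru = TreePLRU4Way()
    for way in (0, 2, 1):        # simulate hits on ways 0, 2 and 1
        plru.touch(way)
    print(plru.victim())         # 3 -- the only way never touched
```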


Open Access Article (22 pages, 1911 KiB)
Automation Bias and Complacency in Security Operation Centers
by Jack Tilbury and Stephen Flowerday
Computers 2024, 13(7), 165; https://doi.org/10.3390/computers13070165 - 3 Jul 2024
Abstract: The volume and complexity of alerts that security operation center (SOC) analysts must manage necessitate automation. Increased automation in SOCs amplifies the risk of automation bias and complacency, whereby security analysts become over-reliant on automation, failing to seek confirmatory or contradictory information. To identify automation characteristics that assist in the mitigation of automation bias and complacency, we investigated the current and proposed application areas of automation in SOCs and discussed its implications for security analysts. A scoping review of 599 articles from four databases was conducted. The final 48 articles were reviewed by two researchers for quality control and were imported into NVivo14. Thematic analysis was performed, and the use of automation throughout the incident response lifecycle was recognized, predominantly in the detection and response phases. Artificial intelligence and machine learning solutions are increasingly prominent in SOCs, yet support for the human-in-the-loop component is evident. The research culminates by contributing the SOC Automation Implementation Guidelines (SAIG), comprising functional and non-functional requirements for SOC automation tools that, if implemented, permit a mutually beneficial relationship between security analysts and intelligent machines. This is of practical value to human-automation researchers and SOCs striving to optimize processes. Theoretically, a continued understanding of automation bias and its components is achieved.


Open Access Article (27 pages, 6430 KiB)
Integrity and Privacy Assurance Framework for Remote Healthcare Monitoring Based on IoT
by Salah Hamza Alharbi, Ali Musa Alzahrani, Toqeer Ali Syed and Saad Said Alqahtany
Computers 2024, 13(7), 164; https://doi.org/10.3390/computers13070164 - 3 Jul 2024
Abstract: Remote healthcare monitoring (RHM) has become a pivotal component of modern healthcare, offering a crucial lifeline to numerous patients. Ensuring the integrity and privacy of the data generated and transmitted by IoT devices is of paramount importance. The integration of blockchain technology and smart contracts has emerged as a pioneering solution to fortify the security of internet of things (IoT) data transmissions within the realm of healthcare monitoring. In today's healthcare landscape, the IoT plays a pivotal role in remotely monitoring and managing patients' well-being. Furthermore, blockchain's decentralized and immutable ledger ensures that all IoT data transactions are securely recorded, timestamped, and resistant to unauthorized modifications. This heightened level of data security is critical in healthcare, where the integrity and privacy of patient information are nonnegotiable. This research endeavors to harness the power of blockchain and smart contracts to establish a robust and tamper-proof framework for healthcare IoT data. Employing smart contracts, which are self-executing agreements programmed with predefined rules, enables us to automate and validate data transactions within the IoT ecosystem. These contracts execute automatically when specific conditions are met, eliminating the need for manual intervention and oversight. This automation not only streamlines data processing but also enhances its accuracy and reliability by reducing the risk of human error. Additionally, smart contracts provide a transparent and tamper-proof mechanism for verifying the validity of transactions, thereby mitigating the risk of fraudulent activities. By leveraging smart contracts, organizations can ensure the integrity and efficiency of data transactions within the IoT ecosystem, leading to improved trust, transparency, and security. Our experiments demonstrate the application of a blockchain approach to secure transmissions in IoT for RHM, as illustrated in the paper, showcasing the practical applicability of blockchain technology in real-world scenarios.
(This article belongs to the Section Blockchain Infrastructures and Enabled Applications)
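To make the tamper-evidence idea concrete, here is a minimal Python sketch of a hash-chained log for IoT sensor readings. It is a simplification for illustration only (no consensus, no signatures, no smart-contract runtime), and the field names are assumptions rather than the authors' framework.

```python
# Minimal hash-chained log illustrating how blockchain-style linking makes
# IoT readings tamper-evident. Real deployments add consensus, signatures,
# and smart-contract validation on top of this idea.
import hashlib
import json
import time


def _hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


class ReadingLedger:
    def __init__(self):
        genesis = {"index": 0, "timestamp": time.time(), "reading": None, "prev_hash": "0" * 64}
        self.chain = [genesis]

    def append(self, reading: dict) -> None:
        prev = self.chain[-1]
        block = {
            "index": prev["index"] + 1,
            "timestamp": time.time(),
            "reading": reading,                 # e.g. {"patient": "p-17", "hr": 72}
            "prev_hash": _hash(prev),           # link to the previous block
        }
        self.chain.append(block)

    def verify(self) -> bool:
        """Recompute every link; any edited block breaks the chain."""
        return all(
            self.chain[i]["prev_hash"] == _hash(self.chain[i - 1])
            for i in range(1, len(self.chain))
        )


if __name__ == "__main__":
    ledger = ReadingLedger()
    ledger.append({"patient": "p-17", "hr": 72})
    ledger.append({"patient": "p-17", "hr": 75})
    print(ledger.verify())                      # True
    ledger.chain[1]["reading"]["hr"] = 40       # tamper with a stored reading
    print(ledger.verify())                      # False
```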


Open Access Article (25 pages, 437 KiB)
Enhancing the Security of Classical Communication with Post-Quantum Authenticated-Encryption Schemes for the Quantum Key Distribution
by Farshad Rahimi Ghashghaei, Yussuf Ahmed, Nebrase Elmrabit and Mehdi Yousefi
Computers 2024, 13(7), 163; https://doi.org/10.3390/computers13070163 - 1 Jul 2024
Abstract: This research aims to establish a secure system for key exchange by using post-quantum cryptography (PQC) schemes in the classical channel of quantum key distribution (QKD). Modern cryptography faces significant threats from quantum computers, which can solve classical problems rapidly. PQC schemes address critical security challenges in QKD, particularly in authentication and encryption, to ensure reliable communication across quantum and classical channels. The other objective of this study is to balance security and communication speed among various PQC algorithms at different security levels, specifically CRYSTALS-Kyber, CRYSTALS-Dilithium, and Falcon, which are finalists in the National Institute of Standards and Technology (NIST) Post-Quantum Cryptography Standardization project. The quantum channel of QKD is simulated with Qiskit, a comprehensive and well-supported tool in the field of quantum computing. By providing a detailed analysis of the performance of these three algorithms against Rivest–Shamir–Adleman (RSA), the results will guide companies and organizations in selecting an optimal combination for their QKD systems to achieve a reliable balance between efficiency and security. Our findings demonstrate that the implemented PQC schemes effectively address the security challenges posed by quantum computers while keeping performance similar to RSA.
(This article belongs to the Section ICT Infrastructures for Cybersecurity)
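As background on what the quantum channel of QKD produces before the classical post-processing that PQC then has to protect, the sketch below simulates BB84-style basis sifting with plain Python randomness. It is a didactic stand-in, not the authors' Qiskit setup, and all parameters are arbitrary.

```python
# Toy BB84 key-sifting simulation: random bits and bases for Alice, random
# measurement bases for Bob, and sifting over the (authenticated) classical
# channel. Didactic only -- no photons, no eavesdropper, no error correction.
import secrets

N = 32                                             # number of transmitted qubits
alice_bits  = [secrets.randbelow(2) for _ in range(N)]
alice_bases = [secrets.randbelow(2) for _ in range(N)]   # 0 = rectilinear, 1 = diagonal
bob_bases   = [secrets.randbelow(2) for _ in range(N)]

# When Bob measures in the same basis, he recovers Alice's bit; otherwise the
# outcome is random (modelled here with a fresh coin flip).
bob_bits = [
    a_bit if a_basis == b_basis else secrets.randbelow(2)
    for a_bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases)
]

# Sifting: both sides announce their bases on the classical channel and keep
# only the positions where the bases matched. This announcement is exactly the
# traffic that must be authenticated (e.g. with a PQC signature scheme).
sifted_key = [a for a, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases)
              if a_basis == b_basis]
print(len(sifted_key), sifted_key)                 # roughly N/2 bits survive sifting
```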


Open Access Article (13 pages, 803 KiB)
Bridging the Gap between Project-Oriented and Exercise-Oriented Automatic Assessment Tools
by Bruno Pereira Cipriano, Bernardo Baltazar, Nuno Fachada, Athanasios Vourvopoulos and Pedro Alves
Computers 2024, 13(7), 162; https://doi.org/10.3390/computers13070162 - 30 Jun 2024
Abstract: In this study, we present the DP Plugin for IntelliJ IDEA, designed to extend the Drop Project (DP) Automatic Assessment Tool (AAT) by making it more suitable for handling small exercises in exercise-based learning environments. Our aim was to address the limitations of DP in supporting small assignments while retaining its strengths in project-based learning. The plugin leverages DP's REST API to streamline the submission process, integrating assignment instructions and feedback directly within the IDE. A student survey conducted during the 2022/23 academic year revealed a positive reception, highlighting benefits such as time efficiency and ease of use. Students also provided valuable feedback, leading to various improvements that have since been integrated into the plugin. Despite these promising results, the study is limited by the relatively small percentage of survey respondents. Our findings suggest that an IDE plugin can significantly improve the usability of project-oriented AATs for small exercises, informing the development of future educational tools suitable for mixed project-based and exercise-based learning environments.
(This article belongs to the Special Issue Future Trends in Computer Programming Education)
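The abstract mentions driving submissions through DP's REST API from inside the IDE; the Python sketch below shows the general shape of such a client. The endpoint paths, form fields, and token header are hypothetical placeholders, not Drop Project's actual API.

```python
# Hypothetical client for submitting an assignment archive to an AAT's REST API
# and polling for feedback. Endpoint paths, fields, and headers are placeholders,
# not Drop Project's real interface.
import time
import requests

BASE_URL = "https://dp.example.org/api"          # placeholder server
TOKEN = "student-personal-access-token"          # placeholder credential


def submit(assignment_id: str, zip_path: str) -> str:
    with open(zip_path, "rb") as fh:
        resp = requests.post(
            f"{BASE_URL}/assignments/{assignment_id}/submissions",
            headers={"Authorization": f"Bearer {TOKEN}"},
            files={"archive": fh},
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json()["submissionId"]


def wait_for_feedback(submission_id: str) -> dict:
    while True:
        resp = requests.get(
            f"{BASE_URL}/submissions/{submission_id}",
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=30,
        )
        resp.raise_for_status()
        report = resp.json()
        if report.get("status") != "pending":    # e.g. build + test results are ready
            return report
        time.sleep(2)


if __name__ == "__main__":
    sid = submit("lab-01", "lab-01.zip")
    print(wait_for_feedback(sid))
```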


Open Access Article (17 pages, 1621 KiB)
Modeling Autonomous Vehicle Responses to Novel Observations Using Hierarchical Cognitive Representations Inspired Active Inference
by Sheida Nozari, Ali Krayani, Pablo Marin, Lucio Marcenaro, David Martin Gomez and Carlo Regazzoni
Computers 2024, 13(7), 161; https://doi.org/10.3390/computers13070161 - 28 Jun 2024
Abstract: Equipping autonomous agents for dynamic interaction and navigation is a significant challenge in intelligent transportation systems. This study aims to address this by implementing a brain-inspired model for decision making in autonomous vehicles. We employ active inference, a Bayesian approach that models decision-making processes similar to the human brain, focusing on the agent's preferences and the principle of free energy. This approach is combined with imitation learning to enhance the vehicle's ability to adapt to new observations and make human-like decisions. The research involved developing a multi-modal self-awareness architecture for autonomous driving systems and testing this model in driving scenarios, including abnormal observations. The results demonstrated the model's effectiveness in enabling the vehicle to make safe decisions, particularly in unobserved or dynamic environments. The study concludes that the integration of active inference with imitation learning significantly improves the performance of autonomous vehicles, offering a promising direction for future developments in intelligent transportation systems.
(This article belongs to the Special Issue System-Integrated Intelligence and Intelligent Systems 2023)

Open Access Article (24 pages, 501 KiB)
An NLP-Based Exploration of Variance in Student Writing and Syntax: Implications for Automated Writing Evaluation
by Maria Goldshtein, Amin G. Alhashim and Rod D. Roscoe
Computers 2024, 13(7), 160; https://doi.org/10.3390/computers13070160 - 25 Jun 2024
Abstract: In writing assessment, expert human evaluators ideally judge individual essays with attention to variance among writers' syntactic patterns. There are many ways to compose text successfully or less successfully. For automated writing evaluation (AWE) systems to provide accurate assessment and relevant feedback, they must be able to consider similar kinds of variance. The current study employed natural language processing (NLP) to explore variance in syntactic complexity and sophistication across clusters characterized in a large corpus (n = 36,207) of middle school and high school argumentative essays. Using NLP tools, k-means clustering, and discriminant function analysis (DFA), we observed that student writers employed four distinct syntactic patterns: (1) familiar and descriptive language, (2) consistently simple noun phrases, (3) variably complex noun phrases, and (4) moderate complexity with less familiar language. Importantly, each pattern spanned the full range of writing quality; there were no syntactic patterns consistently evaluated as "good" or "bad". These findings support the need for nuanced approaches in automated writing assessment while informing ways that AWE can participate in that process. Future AWE research can and should explore similar variability across other detectable elements of writing (e.g., vocabulary, cohesion, discursive cues, and sentiment) via diverse modeling methods.
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)

Open Access Article (23 pages, 8049 KiB)
Enhanced Security Access Control Using Statistical-Based Legitimate or Counterfeit Identification System
by Aisha Edrah and Abdelkader Ouda
Computers 2024, 13(7), 159; https://doi.org/10.3390/computers13070159 - 22 Jun 2024
Abstract: With our increasing reliance on technology, there is a growing demand for efficient and seamless access control systems. Smartphone-centric biometric methods offer a diverse range of potential solutions capable of verifying users and providing an additional layer of security to prevent unauthorized access. To ensure the security and accuracy of smartphone-centric biometric identification, it is crucial that the phone reliably identifies its legitimate owner. Once the legitimate holder has been successfully determined, the phone can effortlessly provide real-time identity verification for various applications. To achieve this, we introduce a novel smartphone-integrated detection and control system called Identification: Legitimate or Counterfeit (ILC), which utilizes gait cycle analysis. The ILC system employs the smartphone's accelerometer sensor, along with advanced statistical methods, to detect the user's gait pattern, enabling real-time identification of the smartphone owner. This approach relies on statistical analysis of measurements obtained from the accelerometer sensor, specifically peaks extracted from the X-axis data. Subsequently, the derived feature's probability distribution function (PDF) is computed and compared to the known user's PDF. The calculated probability verifies the similarity between the distributions, and a decision is made with 92.18% accuracy based on a predetermined verification threshold.
(This article belongs to the Special Issue Wireless Sensor Network, IoT and Cloud Computing Technologies for Smart Cities)
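A rough Python sketch of the kind of pipeline the ILC abstract describes (peak extraction from accelerometer X-axis data, then comparison of the resulting distribution against a stored profile) is given below. The window sizes, bin counts, and threshold are invented for illustration, and the authors' actual statistical procedure may differ.

```python
# Illustrative gait-verification pipeline: extract peaks from the accelerometer
# X-axis, estimate their distribution, and compare it with the enrolled owner's
# profile. Parameters (bins, distance, threshold) are arbitrary examples.
import numpy as np
from scipy.signal import find_peaks


def peak_features(accel_x: np.ndarray) -> np.ndarray:
    """Heights of peaks in the X-axis acceleration signal."""
    peaks, _ = find_peaks(accel_x, distance=20)     # ~one peak per step, rate-dependent
    return accel_x[peaks]


def empirical_pmf(values: np.ndarray, bins: np.ndarray) -> np.ndarray:
    hist, _ = np.histogram(values, bins=bins)
    return hist / (hist.sum() + 1e-12)


def is_legitimate(new_signal: np.ndarray, owner_peaks: np.ndarray,
                  threshold: float = 0.5) -> bool:
    bins = np.linspace(min(owner_peaks.min(), new_signal.min()),
                       max(owner_peaks.max(), new_signal.max()), 21)
    p = empirical_pmf(owner_peaks, bins)
    q = empirical_pmf(peak_features(new_signal), bins)
    overlap = np.minimum(p, q).sum()                # histogram overlap in [0, 1]
    return overlap >= threshold


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(2000)
    owner_walk = np.sin(t / 15) + 0.1 * rng.standard_normal(t.size)
    enrolled_peaks = peak_features(owner_walk)
    print(is_legitimate(owner_walk, enrolled_peaks))           # same gait -> True
    print(is_legitimate(0.3 * np.sin(t / 7), enrolled_peaks))  # different gait -> likely False
```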
Open Access Article (15 pages, 606 KiB)
Personalized Classifier Selection for EEG-Based BCIs
by Javad Rahimipour Anaraki, Antonina Kolokolova and Tom Chau
Computers 2024, 13(7), 158; https://doi.org/10.3390/computers13070158 - 21 Jun 2024
Abstract: The most important component of an Electroencephalogram (EEG) Brain–Computer Interface (BCI) is its classifier, which translates EEG signals in real time into meaningful commands. The accuracy and speed of the classifier determine the utility of the BCI. However, there is significant intra- and inter-subject variability in EEG data, complicating the choice of the best classifier for different individuals over time. There is a keen need for an automatic approach to selecting a personalized classifier suited to an individual's current needs. To this end, we have developed a systematic methodology for individual classifier selection, wherein the structural characteristics of an EEG dataset are used to predict a classifier that will perform with high accuracy. The method was evaluated using motor imagery EEG data from Physionet. We confirmed that our approach could consistently predict a classifier whose performance was no worse than that of the single best-performing classifier across the participants. Furthermore, Kullback–Leibler divergences between reference distributions and signal amplitude and class label distributions emerged as the most important characteristics for classifier prediction, suggesting that classifier choice depends heavily on the morphology of signal amplitude densities and the degree of class imbalance in an EEG dataset.
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain 2024)

Open Access Article (24 pages, 949 KiB)
Advancing Skin Cancer Prediction Using Ensemble Models
by Priya Natha and Pothuraju RajaRajeswari
Computers 2024, 13(7), 157; https://doi.org/10.3390/computers13070157 - 21 Jun 2024
Abstract: There are many different kinds of skin cancer, and an early and precise diagnosis is crucial because skin cancer is both frequent and deadly. The key to effective treatment is accurately classifying the various skin cancers, which have unique traits. Dermoscopy and other advanced imaging techniques have enhanced early detection by providing detailed images of lesions. However, accurately interpreting these images to distinguish between benign and malignant tumors remains a difficult task. Improved predictive modeling techniques are necessary due to the frequent occurrence of erroneous and inconsistent outcomes in the present diagnostic processes. Machine learning (ML) models have become essential in the field of dermatology for the automated identification and categorization of skin cancer lesions using image data. The aim of this work is to develop improved skin cancer predictions by using ensemble models, which combine numerous machine learning approaches to maximize their combined strengths and reduce their individual shortcomings. This paper proposes a fresh approach to ensemble model optimization for skin cancer classification: the Max Voting method. We trained and assessed five different ensemble models using the ISIC 2018 and HAM10000 datasets: AdaBoost, CatBoost, Random Forest, Gradient Boosting, and Extra Trees. Their combined predictions enhance the overall performance with the Max Voting method. Moreover, the ensemble models were fed with feature vectors that were optimally generated from the image data by a genetic algorithm (GA). We show that, with an accuracy of 95.80%, the Max Voting approach significantly improves the predictive performance when compared to the five ensemble models individually. Obtaining the best results for F1-measure, recall, and precision, the Max Voting method turned out to be the most dependable and robust. The novel aspect of this work is that skin cancer lesions are more robustly and reliably classified using the Max Voting technique, which combines the benefits of several pre-trained machine learning models.
Supplementary material: Supplementary File 1 (ZIP, 94092 KiB)
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain 2024)
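For readers who want to see the Max Voting idea in code, a condensed scikit-learn sketch follows. It is a generic hard-voting ensemble over the same model families named in the abstract, with placeholder features and hyperparameters rather than the authors' GA-selected ones, and it substitutes a second scikit-learn gradient-boosting model for CatBoost so that the sketch needs only one library.

```python
# Generic hard-voting ("max voting") ensemble over the model families named above.
# Features and hyperparameters are placeholders; the paper uses GA-selected image
# features, and CatBoost is replaced here by a second gradient-boosting model.
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, ExtraTreesClassifier,
                              GradientBoostingClassifier, RandomForestClassifier,
                              VotingClassifier)
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Stand-in for GA-optimized feature vectors extracted from lesion images.
X, y = make_classification(n_samples=1000, n_features=40, n_informative=12,
                           n_classes=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

members = [
    ("ada", AdaBoostClassifier(random_state=0)),
    ("gb", GradientBoostingClassifier(random_state=0)),
    ("gb2", GradientBoostingClassifier(learning_rate=0.05, random_state=1)),  # CatBoost stand-in
    ("rf", RandomForestClassifier(random_state=0)),
    ("et", ExtraTreesClassifier(random_state=0)),
]

ensemble = VotingClassifier(estimators=members, voting="hard")  # majority (max) voting
ensemble.fit(X_tr, y_tr)

for name, model in members:
    print(name, accuracy_score(y_te, model.fit(X_tr, y_tr).predict(X_te)))
print("max-voting ensemble", accuracy_score(y_te, ensemble.predict(X_te)))
```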
Open Access Article (21 pages, 4836 KiB)
Chef Dalle: Transforming Cooking with Multi-Model Multimodal AI
by Brendan Hannon, Yulia Kumar, J. Jenny Li and Patricia Morreale
Computers 2024, 13(7), 156; https://doi.org/10.3390/computers13070156 - 21 Jun 2024
Abstract: In an era where dietary habits significantly impact health, technological interventions can offer personalized and accessible food choices. This paper introduces Chef Dalle, a recipe recommendation system that leverages multi-model and multimodal human–computer interaction (HCI) techniques to provide personalized cooking guidance. The application integrates voice-to-text conversion via Whisper and ingredient image recognition through GPT-Vision. It employs an advanced recipe filtering system that utilizes user-provided ingredients to fetch recipes, which are then evaluated by multiple AI models through integrations of OpenAI, Google Gemini, Claude, and/or Anthropic APIs to deliver highly personalized recommendations. These methods enable users to interact with the system using voice, text, or images, accommodating various dietary restrictions and preferences. Furthermore, the utilization of DALL-E 3 for generating recipe images enhances user engagement. User feedback mechanisms allow for the refinement of future recommendations, demonstrating the system's adaptability. Chef Dalle showcases potential applications ranging from home kitchens to grocery stores and restaurant menu customization, addressing accessibility and promoting healthier eating habits. This paper underscores the significance of multimodal HCI in enhancing culinary experiences, setting a precedent for future developments in the field.
(This article belongs to the Special Issue Harnessing Artificial Intelligence for Social and Semantic Understanding)


Open Access Article (17 pages, 5897 KiB)
A Contextual Model for Visual Information Processing
by Illia Khurtin and Mukesh Prasad
Computers 2024, 13(6), 155; https://doi.org/10.3390/computers13060155 - 20 Jun 2024
Abstract: Despite significant achievements in the artificial narrow intelligence sphere, the mechanisms of human-like (general) intelligence are still undeveloped. There is a theory stating that the human brain extracts the meaning of information rather than recognizing the features of a phenomenon. Extracting the meaning means finding a set of transformation rules (a context) and applying them to the incoming information, producing an interpretation. The interpretation is then compared to something already seen and stored in memory. Information can have different meanings in different contexts. A mathematical model of a context processor and a differential contextual space which can perform this interpretation is discussed and developed in this paper. The study examines whether the basic principles of differential contextual spaces work in practice. The model is implemented in the Rust programming language and trained on black-and-white images which are rotated and shifted both horizontally and vertically, mimicking the saccades and torsion movements of a human eye. Then, a picture that has never been seen under a particular transformation, but has been seen under another one, is presented to the model. The model considers the image in all known contexts and extracts the meaning. The results show that the program can successfully process black-and-white images transformed by shifts and rotations. This research lays the groundwork for further investigation of the contextual-model principles by which general intelligence might operate.
(This article belongs to the Special Issue Feature Papers in Computers 2024)
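A small Python/NumPy sketch of the "consider the image in all known contexts" idea follows: a stored template is compared with an incoming image under a set of candidate shift and rotation rules, and the best-matching context is reported. This is only a conceptual analogue under invented parameters, not the authors' Rust implementation or their differential contextual space.

```python
# Conceptual analogue of interpreting an image under a set of candidate
# transformation rules ("contexts"): apply each rule, compare with memory,
# and keep the best match. Parameters and transforms are illustrative only.
import numpy as np
from scipy.ndimage import rotate, shift


def make_contexts():
    """Candidate transformation rules: (name, function) pairs."""
    contexts = []
    for angle in (0, 90, 180, 270):
        contexts.append((f"rotate_{angle}",
                         lambda img, a=angle: rotate(img, a, reshape=False, order=0)))
    for dy, dx in ((0, 3), (3, 0), (-3, 0), (0, -3)):
        contexts.append((f"shift_{dy}_{dx}",
                         lambda img, d=(dy, dx): shift(img, d, order=0)))
    return contexts


def interpret(observation: np.ndarray, memory: np.ndarray):
    """Return the context under which the observation matches memory best."""
    best = None
    for name, transform in make_contexts():
        candidate = transform(observation)
        score = (candidate == memory).mean()      # pixel agreement with the stored image
        if best is None or score > best[1]:
            best = (name, score)
    return best


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    memory = (rng.random((32, 32)) > 0.7).astype(float)        # a stored black-and-white pattern
    observation = rotate(memory, -90, reshape=False, order=0)  # the same pattern, seen rotated
    print(interpret(observation, memory))                      # ('rotate_90', ~1.0)
```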


Open Access Article (27 pages, 6904 KiB)
Deep Convolutional Generative Adversarial Networks in Image-Based Android Malware Detection
by Francesco Mercaldo, Fabio Martinelli and Antonella Santone
Computers 2024, 13(6), 154; https://doi.org/10.3390/computers13060154 - 19 Jun 2024
Abstract: The recent advancements in generative adversarial networks have showcased their remarkable ability to create images that are indistinguishable from real ones. This has prompted both the academic and industrial communities to tackle the challenge of distinguishing fake images from genuine ones. We introduce a method to assess whether images generated by generative adversarial networks, using a dataset of real-world Android malware applications, can be distinguished from actual images. Our experiments involved two types of deep convolutional generative adversarial networks and utilized images derived from both static analysis (which does not require running the application) and dynamic analysis (which does require running the application). After generating the images, we trained several supervised machine learning models to determine whether these classifiers can differentiate between real and generated malicious applications. Our results indicate that, despite being visually indistinguishable to the human eye, the generated images were correctly identified by a classifier with an F-measure of approximately 0.8. While most generated images were accurately recognized as fake, some were not, leading them to be considered as images produced by real applications.
(This article belongs to the Special Issue Current Issue and Future Directions in Multimedia Hiding and Signal Processing)
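Image-based malware analysis typically starts by rendering an application's bytes as a grayscale picture; the short sketch below shows one common way to do this with NumPy and Pillow. The fixed width and the use of the raw APK bytes are conventional choices assumed for illustration, not necessarily the preprocessing used in this article.

```python
# Render an Android application's raw bytes as a grayscale image, a common
# preprocessing step in image-based malware detection. The fixed width and the
# choice of file (APK or classes.dex) are illustrative conventions.
import numpy as np
from PIL import Image


def bytes_to_image(path: str, width: int = 256) -> Image.Image:
    data = np.fromfile(path, dtype=np.uint8)          # one pixel per byte, values 0-255
    height = len(data) // width
    data = data[: height * width].reshape(height, width)
    return Image.fromarray(data, mode="L")            # 8-bit grayscale


if __name__ == "__main__":
    img = bytes_to_image("sample.apk")                # path is a placeholder
    img.save("sample_gray.png")                       # input for a CNN/GAN pipeline
    print(img.size)
```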


Open Access Article (24 pages, 4449 KiB)
Empowering Communication: A Deep Learning Framework for Arabic Sign Language Recognition with an Attention Mechanism
by R. S. Abdul Ameer, M. A. Ahmed, Z. T. Al-Qaysi, M. M. Salih and Moceheb Lazam Shuwandy
Computers 2024, 13(6), 153; https://doi.org/10.3390/computers13060153 - 19 Jun 2024
Abstract: This article emphasises the urgent need for appropriate communication tools for communities of people who are deaf or hard-of-hearing, with a specific emphasis on Arabic Sign Language (ArSL). In this study, we use long short-term memory (LSTM) models in conjunction with MediaPipe to reduce the barriers to effective communication and social integration for deaf communities. The model design incorporates LSTM units and an attention mechanism to handle input sequences of keypoints extracted from recorded gestures. The attention layer selectively directs its focus toward relevant segments of the input sequence, whereas the LSTM layer handles temporal relationships and encodes the sequential data. A comprehensive dataset comprising fifty frequently used words and numbers in ArSL was collected for developing the recognition model. This dataset comprises many instances of gestures recorded by five volunteers. The results of the experiment support the effectiveness of the proposed approach, as the model achieved accuracies of more than 85% (individual volunteers) and 83% (combined data). The high level of precision emphasises the potential of artificial-intelligence-powered translation software to improve effective communication for people with hearing impairments and to enable them to interact with the larger community more easily.
(This article belongs to the Special Issue Advanced Image Processing and Computer Vision)
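A compact PyTorch sketch of an LSTM-plus-attention classifier over keypoint sequences, the general architecture the abstract describes, is given below. Layer sizes, sequence length, and the number of keypoint coordinates are invented placeholders, and the authors' exact architecture may differ.

```python
# Compact LSTM + attention classifier over sequences of pose keypoints, the
# general architecture described above. All dimensions are placeholders
# (e.g. 30 frames, 126 keypoint coordinates per frame, 50 sign classes).
import torch
import torch.nn as nn


class SignLSTMAttention(nn.Module):
    def __init__(self, n_features=126, hidden=128, n_classes=50):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn_score = nn.Linear(hidden, 1)      # one attention score per time step
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, x):                           # x: (batch, time, n_features)
        states, _ = self.lstm(x)                    # (batch, time, hidden)
        weights = torch.softmax(self.attn_score(states), dim=1)   # (batch, time, 1)
        context = (weights * states).sum(dim=1)     # attention-weighted summary
        return self.classifier(context)             # logits over sign classes


if __name__ == "__main__":
    model = SignLSTMAttention()
    keypoint_sequences = torch.randn(8, 30, 126)    # a dummy batch of 8 gesture clips
    logits = model(keypoint_sequences)
    print(logits.shape)                             # torch.Size([8, 50])
```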


Open Access Review (40 pages, 4392 KiB)
Hybrid Architectures Used in the Protection of Large Healthcare Records Based on Cloud and Blockchain Integration: A Review
by Leonardo Juan Ramirez Lopez, David Millan Mayorga, Luis Hernando Martinez Poveda, Andres Felipe Carbonell Amaya and Wilson Rojas Reales
Computers 2024, 13(6), 152; https://doi.org/10.3390/computers13060152 - 12 Jun 2024
Abstract: The management of large medical files poses a critical challenge in the health sector, with conventional systems facing deficiencies in security, scalability, and efficiency. Blockchain ensures the immutability and traceability of medical records, while the cloud allows scalable and efficient storage. Together, they can transform the data management of electronic health record applications. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology was used to select the studies relevant to this research, with special emphasis on maintaining the integrity and security of the blockchain while exploiting the potential and efficiency of cloud infrastructures. The study's focus is to provide a comprehensive and insightful examination of the modern landscape concerning the integration of blockchain and cloud advances, highlighting the current challenges and building a solid foundation for future development. Furthermore, it is very important to strengthen the integration of blockchain security with the dynamic potential of cloud computing while guaranteeing that information integrity and security remain uncompromised. In conclusion, this paper serves as an important resource for analysts, specialists, and partners looking to delve into and develop the integration of blockchain and cloud innovations.
(This article belongs to the Section Cloud Continuum and Enabled Applications)


Open Access Article (27 pages, 1367 KiB)
InfoSTGCAN: An Information-Maximizing Spatial-Temporal Graph Convolutional Attention Network for Heterogeneous Human Trajectory Prediction
by Kangrui Ruan and Xuan Di
Computers 2024, 13(6), 151; https://doi.org/10.3390/computers13060151 - 11 Jun 2024
Abstract: Predicting the future trajectories of multiple interacting pedestrians within a scene has increasingly gained importance in various fields, e.g., autonomous driving and human–robot interaction. The complexity of this problem is heightened by the social dynamics among different pedestrians and their heterogeneous implicit preferences. In this paper, we present the Information-Maximizing Spatial-Temporal Graph Convolutional Attention Network (InfoSTGCAN), which takes into account both pedestrian interactions and heterogeneous behavior choice modeling. To effectively capture the complex interactions among pedestrians, we integrate spatial-temporal graph convolution and spatial-temporal graph attention. To grasp the heterogeneity in pedestrians' behavior choices, our model goes a step further by learning to predict an individual-level latent code for each pedestrian. Each latent code represents a distinct pattern of movement choice. Finally, based on the observed historical trajectory and the learned latent code, the proposed method is trained to cover the ground-truth future trajectory of each pedestrian with a bivariate Gaussian distribution. We evaluate the proposed method through a comprehensive list of experiments and demonstrate that it outperforms all baseline methods on the commonly used metrics, Average Displacement Error and Final Displacement Error. Notably, visualizations of the generated trajectories reveal our method's capacity to handle different scenarios.
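As a concrete reference for the last modeling step (covering the future position with a bivariate Gaussian), the snippet below evaluates the negative log-likelihood of observed positions under predicted Gaussian parameters. The parameterization (means, standard deviations, correlation) is the standard one used in trajectory-prediction work and is shown here as an assumption, not code from the paper.

```python
# Negative log-likelihood of observed 2-D positions under a predicted bivariate
# Gaussian (mu_x, mu_y, sigma_x, sigma_y, rho) -- the standard training loss when
# a trajectory model outputs a Gaussian per future time step.
import numpy as np


def bivariate_gaussian_nll(pred: np.ndarray, target: np.ndarray) -> float:
    """pred: (T, 5) rows of [mu_x, mu_y, sigma_x, sigma_y, rho]; target: (T, 2)."""
    mu_x, mu_y, sigma_x, sigma_y, rho = pred.T
    dx = (target[:, 0] - mu_x) / sigma_x
    dy = (target[:, 1] - mu_y) / sigma_y
    one_minus_rho2 = 1.0 - rho ** 2
    z = dx ** 2 - 2.0 * rho * dx * dy + dy ** 2
    log_pdf = (
        -z / (2.0 * one_minus_rho2)
        - np.log(2.0 * np.pi * sigma_x * sigma_y * np.sqrt(one_minus_rho2))
    )
    return float(-log_pdf.mean())


if __name__ == "__main__":
    # Two future steps: predictions centered near the true positions, mild correlation.
    pred = np.array([[0.0, 0.0, 0.5, 0.5, 0.1],
                     [1.0, 0.8, 0.6, 0.6, 0.2]])
    target = np.array([[0.1, -0.1],
                       [0.9, 0.9]])
    print(bivariate_gaussian_nll(pred, target))
```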


Open Access Article (13 pages, 830 KiB)
On Predicting Exam Performance Using Version Control Systems' Features
by Lorenzo Canale, Luca Cagliero, Laura Farinetti and Marco Torchiano
Computers 2024, 13(6), 150; https://doi.org/10.3390/computers13060150 - 9 Jun 2024
Abstract: The advent of Version Control Systems (VCSs) in computer science education has significantly improved the learning experience. The Learning Analytics community has started to analyze the interactions between students and VCSs to evaluate the behavioral and cognitive aspects of the learning process. Within this scope, a promising research direction is the use of Artificial Intelligence (AI) to predict students' exam outcomes early, based on VCS usage data. Previous AI-based solutions have two main drawbacks: (i) they rely on static models, which disregard temporal changes in the student–VCS interactions, and (ii) AI reasoning is not transparent to end-users. This paper proposes a time-dependent approach to early prediction of student performance from VCS data. It applies and compares different classification models trained at various course stages. To gain insights into exam performance predictions, it combines classification with explainable AI techniques and visualizes the explanations of the time-varying performance predictors. The results of a real case study show that the effect of VCS-based features on the exam success rate is relevant well before the end of the course, whereas the timely submission of the first lab assignment is a reliable predictor of the exam grade.
(This article belongs to the Special Issue Recent Advances in Computer-Assisted Learning)
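To illustrate the idea of training classifiers at various course stages on cumulative VCS features, here is a small scikit-learn sketch with synthetic data. The feature names (commit counts, first-lab submission timing) and week cut-offs are hypothetical examples, not the study's actual feature set or models.

```python
# Train one classifier per course stage on the VCS features available up to that
# point, mimicking time-dependent early prediction. Features and cut-offs are
# hypothetical; the study's real feature set and models differ.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_students = 200

# Synthetic per-stage VCS features: commits in weeks 1-4, weeks 5-8, and whether
# the first lab assignment was submitted on time.
commits_w1_4 = rng.poisson(6, n_students)
commits_w5_8 = rng.poisson(8, n_students)
first_lab_on_time = rng.integers(0, 2, n_students)

# Synthetic outcome loosely driven by early engagement (for demonstration only).
passed = ((0.2 * commits_w1_4 + 1.5 * first_lab_on_time
           + rng.normal(0, 1, n_students)) > 1.8).astype(int)

stages = {
    "after week 4": np.column_stack([commits_w1_4, first_lab_on_time]),
    "after week 8": np.column_stack([commits_w1_4, commits_w5_8, first_lab_on_time]),
}

for stage, X in stages.items():
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    score = cross_val_score(clf, X, passed, cv=5).mean()
    print(f"{stage}: cross-validated accuracy = {score:.2f}")
```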


Open Access Article (15 pages, 442 KiB)
A Clustering and PL/SQL-Based Method for Assessing MLP-Kmeans Modeling
by Victor Hugo Silva-Blancas, Hugo Jiménez-Hernández, Ana Marcela Herrera-Navarro, José M. Álvarez-Alvarado, Diana Margarita Córdova-Esparza and Juvenal Rodríguez-Reséndiz
Computers 2024, 13(6), 149; https://doi.org/10.3390/computers13060149 - 9 Jun 2024
Abstract: With new high-performance server technology in data centers and bunkers, it is necessary to optimize search engines to process time and resource consumption efficiently. The database query system, upheld by the standard SQL language, has maintained the same functional design since the advent of PL/SQL. This situation is caused by recent research focusing on computer resource management, encryption, and security rather than on improving data mining based on AI tools, machine learning (ML), and artificial neural networks (ANNs). This work presents a proposed methodology integrating a multilayer perceptron (MLP) with Kmeans. This methodology is compared with traditional PL/SQL tools and aims to improve the database response time while outlining future advantages for ML and Kmeans in data processing. We propose a new corollary, $h_k \rightarrow H = \mathrm{SSE}(C)$, where $k > 0$ and $\exists X$, executed on application software querying data collections with more than 306 thousand records. This study produced a comparative table between PL/SQL and MLP-Kmeans based on three hypotheses: line query, group query, and total query. The results show that the line query increased to 9 ms, the group query increased from 88 to 2460 ms, and the total query from 13 to 279 ms. Testing one methodology against the other not only shows the additional overhead and time consumption that training brings to database querying, but also that a neural network, despite its complexity, is capable of producing more precise results than the simple use of PL/SQL instructions, which will be increasingly important for domain-specific problems in the future.
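To make the MLP-Kmeans pairing concrete, the sketch below clusters records with k-means and trains an MLP to route incoming queries to the right cluster, which is one plausible reading of the combination. The data, layer sizes, and routing role of the MLP are assumptions for illustration, not the paper's exact formulation.

```python
# One plausible reading of an MLP + Kmeans pairing for query acceleration:
# k-means partitions the records, and an MLP learns to map a query's feature
# vector to the partition that should be scanned. All data and sizes are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
records = rng.normal(size=(5000, 8))              # stand-in for numeric record features

kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(records)
labels = kmeans.labels_                           # partition id for every record

# The MLP learns the record -> partition mapping, so an incoming query vector can
# be routed to a single partition instead of scanning the whole collection.
X_tr, X_te, y_tr, y_te = train_test_split(records, labels, test_size=0.2, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
mlp.fit(X_tr, y_tr)
print("routing accuracy:", mlp.score(X_te, y_te))

query = rng.normal(size=(1, 8))                   # a new query's feature vector
partition = int(mlp.predict(query)[0])
hits = np.flatnonzero(labels == partition)        # candidate records to scan
print(f"scan partition {partition}: {hits.size} of {records.shape[0]} records")
```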


Open Access Article (28 pages, 6199 KiB)
ARPocketLab—A Mobile Augmented Reality System for Pedagogic Applications
by Miguel Nunes, Telmo Adão, Somayeh Shahrabadi, António Capela, Diana Carneiro, Pedro Branco, Luís Magalhães, Raul Morais and Emanuel Peres
Computers 2024, 13(6), 148; https://doi.org/10.3390/computers13060148 - 8 Jun 2024
Abstract: The widespread adoption of digital technologies in educational systems has been reflecting, globally, a shift in pedagogic content delivery that seems to fit modern generations of students while tackling relevant challenges faced by the current scholastic context, e.g., progress traceability, fair access to pedagogic content with intuitive visual representativeness, mitigation of mobility issues, and sustainability in crisis situations. Among these technologies, augmented reality (AR) emerges as a particularly promising approach, allowing the visualization of computer-generated interactive data on top of real-world elements, thus enhancing comprehension and intuition regarding educational content, often in mobile settings. While the application of AR to education has been widely addressed, issues related to interaction and cognitive performance are commonly covered, with less attention paid to the limitations associated with setup complexity, mostly related to experience-configuration tools, or to contextual range, i.e., versatility in targeting different technical/scientific domains. Therefore, this paper introduces ARPocketLab, a digital, mobile, flexible, and scalable solution designed for the dynamic needs of modern tutorship. With a dual-interface system, it allows both educators and students to interactively design and engage with AR content directly tied to educational outcomes. Moreover, ARPocketLab's design, aimed at handheld operation with a minimal set of physical resources, is particularly relevant in environments where educational materials are scarce or in situations where remote learning becomes necessary. Its versatility stems from the fact that it only requires a marker or a surface (e.g., a table) to function at full capacity. To evaluate the solution, tests were conducted with 8th-grade Portuguese students within the context of the Physics and Chemistry subject. Results demonstrate the application's effectiveness in providing didactic assistance, with positive feedback not only in terms of usability but also regarding learning performance. The participants also reported openness to the adoption of AR in pedagogic contexts.
(This article belongs to the Special Issue Extended or Mixed Reality (AR + VR): Technology and Applications)


Open Access Review (17 pages, 714 KiB)
Integrating Machine Learning with Non-Fungible Tokens
by Elias Iosif and Leonidas Katelaris
Computers 2024, 13(6), 147; https://doi.org/10.3390/computers13060147 - 7 Jun 2024
Abstract: In this paper, we undertake a thorough comparative examination of data resources pertinent to Non-Fungible Tokens (NFTs) within the framework of Machine Learning (ML). The core research question of the present work is how the integration of ML techniques and NFTs manifests across various domains. Our primary contribution lies in proposing a structured perspective for this analysis, encompassing a comprehensive array of criteria that collectively span the entire spectrum of NFT-related data. To demonstrate the application of the proposed perspective, we systematically survey a selection of indicative research works, drawing insights from diverse sources. By evaluating these data resources against established criteria, we aim to provide a nuanced understanding of their respective strengths, limitations, and potential applications within the intersection of NFTs and ML.
(This article belongs to the Special Issue When Blockchain Meets IoT: Challenges and Potentials)



